Artificial intelligence has become increasingly important in our lives: it's how we unlock our cell phones, it can produce beautiful art, and it's how Siri turns our spoken words into written ones. Even if you don't use artificial intelligence yourself, chances are it's used on you. It's used to tailor advertisements with downright creepy accuracy, it has automated away jobs that used to be done by people, and doctors use it as a tool to predict, say, the probability that a tumor is benign or malignant.
With all the decision-making power that AI holds over our lives, it had better at least be accurate. Unfortunately, it is only as accurate as the humans who build it.
Artificial intelligence is a catch-all term for systems that use algorithms to predict an outcome based on how similar inputs turned out in the past. Since humans decide what training data is fed into an AI system, any flaws or biases in that data will be reflected in the system's results. For example, face-recognition algorithms are often trained mostly on white faces and have great difficulty accurately recognizing non-white faces. The consequences can range from being unable to unlock your phone to being unfairly prosecuted because an AI misidentified you as someone else. Often it feels as though AI holds all the power over us, but we have to ask: what can we, as consumers of these products, do about it?
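To make the data-bias point concrete, here is a minimal sketch, in Python, of how a model trained mostly on one group can quietly fail on another. The groups, features, and decision rules are hypothetical illustrations, not an example from Dwork & Minow.

```python
# A minimal sketch of how skewed training data produces skewed accuracy.
# All data here is synthetic and the scenario is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 2-feature data; the true class boundary depends on `shift`."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training set (90%); group B is underrepresented,
# and its feature-label relationship is slightly different.
Xa, ya = make_group(900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh test data for each group reveals the accuracy gap: the model has
# mostly learned group A's pattern, so group B pays the price.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

Nothing in the model is "malicious" here; the accuracy gap falls directly out of who was, and wasn't, well represented in the training data.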
To answer this question, Dwork & Minow looked at how people gain and keep our trust in general. There are two main aspects to being trustworthy. The first is holding up your end of the bargain: if I give my roommate some money to buy the items on my grocery list, she keeps my trust by using the money to get the listed groceries. But that's not all there is to trust. Say there is some money left over. Would I trust her if she kept it for herself instead of giving it back to me? Probably not, and this points to the second main aspect of trust: knowing that my interests are also your interests. This, in turn, identifies the main source of distrust in artificial intelligence: systems are often built without feedback from the people they affect, at both the business and governmental levels. There is no reliable guarantee that my interests are the interests of the people building the AI.
At this point, Dwork & Minow propose solutions to this distrust. Having identified where AI betrays our trust, we can treat that same place as an opportunity for AI to win it back. Letting consumer participation shape AI to a larger extent would provide more of a guarantee that my interests are the AI's interests. The people who will be affected by an AI system should have a say in what kinds of data it uses and what methods it uses to reach its conclusions. AI methods, Dwork & Minow contend, should be explainable to the average person affected by them, so that we can see there was a process behind the conclusion the AI returned. AI systems should have to answer to ordinary people and compete in the market on trust, so that the less trustworthy are less successful. There should be a set of standards that external bodies use to gauge the trustworthiness of AI systems, and systems should be rated against those guidelines. These measures would introduce the accountability and participation that AI currently lacks.
While these measures would certainly improve trust between AI and the public, there are limits on how fully they can be carried out.
On the issue of explainability, many AI systems rest on mathematics that, while explainable in principle, might be lost on the average person. Does this mean we should limit ourselves to only the simplest AI models? Alternatively, we can implement explainability as far as possible, with the caveat that just because not everyone can follow the explanation behind an AI does not mean the explanation doesn't exist.
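As one illustration of what an explanation can look like, here is a minimal sketch that reads the weights of a simple, interpretable model as plain-language statements. The loan-approval scenario and feature names are hypothetical; Dwork & Minow do not prescribe any particular explainability technique.

```python
# A minimal sketch of one explainability approach: translating a simple
# model's learned weights into plain language. The scenario and feature
# names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["income", "debt", "years_employed"]

# Synthetic loan-approval data: approval is driven mostly by income
# (positively) and debt (negatively), plus some noise.
X = rng.normal(size=(500, 3))
y = (1.0 * X[:, 0] - 1.5 * X[:, 1]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient says how strongly a feature pushes the decision, which
# can be phrased for a non-expert: "higher debt lowered your approval odds."
for name, coef in zip(features, model.coef_[0]):
    direction = "raises" if coef > 0 else "lowers"
    print(f"{name}: {direction} approval odds (weight {coef:+.2f})")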
On the issue of participation, there would have to be a balance between public opinion and expert knowledge, rather than letting one overshadow the other. While public input could certainly improve the fairness of AI systems, we should be careful not to let it overrule expertise to the point of making those systems less reliable overall. In both cases, trust would be improved by implementing these ideals as far as possible without taking them to an absolute extreme. Even so, these would be massive improvements over the current state of AI.
References
Dwork, C., & Minow, M. (2022). Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law. Daedalus, 151(2), 309–321. https://www.jstor.org/stable/48662044